Extending Contrastive Learning to Unsupervised Coreset Selection

Authors

Abstract

Self-supervised contrastive learning offers a means of learning informative features from a pool of unlabeled data. In this paper, we investigate another useful approach: we propose an entirely unsupervised coreset selection method. In this regard, contrastive learning, one of several self-supervised methods, was recently proposed and has consistently delivered the highest performance. This prompted us to choose two leading methods for contrastive learning: the simple framework for contrastive learning of visual representations (SimCLR) and the momentum contrast (MoCo) framework. We calculated the cosine similarity for each example at every epoch over the entire duration of the contrastive learning process and subsequently accumulated these similarity values to obtain a coreset score. Our assumption was that a sample with a low accumulated similarity would likely behave as a coreset sample. Compared with existing coreset selection methods that require labels, our approach reduces the cost associated with human annotation. The unsupervised method implemented in this study achieved improvements of 1.25% (for CIFAR10), 0.82% (for SVHN), and 0.19% (for QMNIST) over a randomly selected subset at a subset size of 30%. Furthermore, our results are comparable to those of supervised coreset selection methods: the differences between our method and the above-mentioned supervised method (forgetting events) were 0.81% on the CIFAR10 dataset, −2.08% on the SVHN dataset (where ours outperformed the supervised method), and 0.01% on the QMNIST dataset at the same subset size. In addition, our method exhibited robustness even if the selection model and the target model were not identical (e.g., using ResNet18 for selection and ResNet101 as the target model). Lastly, we obtained more concrete proof that the selected examples are highly informative by showing the performance gap between coreset and non-coreset samples in a cross-test experiment: for the pair ((testing: non-coreset, training: coreset), (testing: coreset, training: non-coreset)), we observed (94.27%, 67.39%) for CIFAR10, (98.24%, 83.30%) for SVHN, and (99.89%, 93.07%) for QMNIST.
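
To make the scoring procedure described in the abstract concrete, the following is a minimal PyTorch-style sketch of accumulating per-example cosine similarities over training and keeping the lowest-scoring samples. The function names, the assumption that the data loader yields two augmented views plus each sample's index, and the placement of the training step are illustrative assumptions rather than the authors' released implementation; only the idea of summing per-example cosine similarities across epochs and selecting the low-score 30% follows the abstract.

```python
import torch
import torch.nn.functional as F


def accumulate_coreset_scores(encoder, loader, num_epochs, device="cuda"):
    """Accumulate per-example cosine similarity across all training epochs.

    Hypothetical sketch: `loader` is assumed to yield (view1, view2, index)
    batches, where view1/view2 are two augmentations of the same image and
    `index` is the sample's position in the dataset.
    """
    scores = torch.zeros(len(loader.dataset))
    for _ in range(num_epochs):
        # ... one epoch of SimCLR/MoCo training on `encoder` would run here ...
        encoder.eval()
        with torch.no_grad():
            for view1, view2, idx in loader:
                z1 = F.normalize(encoder(view1.to(device)), dim=1)
                z2 = F.normalize(encoder(view2.to(device)), dim=1)
                # Cosine similarity between the two augmented views of each example,
                # added to that example's running score.
                scores[idx] += (z1 * z2).sum(dim=1).cpu()
    return scores


def select_coreset(scores, subset_ratio=0.3):
    """Return indices of the samples with the LOWEST accumulated similarity."""
    k = int(subset_ratio * scores.numel())
    return torch.argsort(scores)[:k]
```

Under these assumptions, the returned indices would define the coreset used to train the (possibly different) target model, e.g. selecting with a ResNet18 encoder and training a ResNet101 on the selected subset.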

Similar Resources

Unsupervised Feature Extraction by Time-Contrastive Learning and Nonlinear ICA

Nonlinear independent component analysis (ICA) provides an appealing framework for unsupervised feature learning, but the models proposed so far are not identifiable. Here, we first propose a new intuitive principle of unsupervised deep learning from time series which uses the nonstationary structure of the data. Our learning principle, time-contrastive learning (TCL), finds a representation wh...

Feature Selection for Unsupervised Learning

In this paper, we identify two issues involved in developing an automated feature subset selection algorithm for unlabeled data: the need for finding the number of clusters in conjunction with feature selection, and the need for normalizing the bias of feature selection criteria with respect to dimension. We explore the feature selection problem and these issues through FSSEM (Feature Subset Se...

Extending BDI plan selection to incorporate learning from experience

An important drawback to the popular Belief, Desire, and Intentions (BDI) paradigm is that such systems include no element of learning from experience. We describe a novel BDI execution framework that models context conditions as decision trees, rather than boolean formulae, allowing agents to learn the probability of success for plans based on experience. By using a probabilistic plan selectio...

Guiding Unsupervised Grammar Induction Using Contrastive Estimation

We describe a novel training criterion for probabilistic grammar induction models, contrastive estimation [Smith and Eisner, 2005], which can be interpreted as exploiting implicit negative evidence and includes a wide class of likelihood-based objective functions. This criterion is a generalization of the function maximized by the Expectation-Maximization algorithm [Dempster et al., 1977]. CE is...

Applying Unsupervised Learning and Action Selection to Robot Teleoperation

Unsupervised learning and supervised remote teleoperator control for robots may seem an unlikely combination. This paper argues that the combination holds advantages for both parties. The operator would like to “instruct” the robot without any special effort, and then be able to hand over some or all of the tasks to be performed without loss of overall supervisory control. In return, the learni...


Journal

Journal title: IEEE Access

Year: 2022

ISSN: 2169-3536

DOI: https://doi.org/10.1109/access.2022.3142758